6 research outputs found

    GAIT Technology for Human Recognition using CNN

    Get PDF
    Gait is a distinctive biometric characteristic that can be captured from a distance; as a result, it has many applications in social security, forensic identification, and crime prevention. Existing gait-recognition techniques either use a gait template, which makes it difficult to preserve temporal information, or a gait sequence, which imposes unnecessary sequential constraints and loses flexibility in representing gait. Our technique instead regards gait as a set of independent frames; based on this deep-set viewpoint, it is immune to frame permutations and can seamlessly combine frames from different videos taken under different conditions, such as different viewing angles, different outfits, or different carrying conditions. In experiments, our single-model approach achieves an average rank-1 accuracy of 96.1% on the CASIA-B gait dataset and 87.9% on the OU-MVLP gait dataset under normal walking conditions. The model also remains robust under several challenging conditions: when walking while carrying a bag or wearing a coat, it reaches 90.8% and 70.3% accuracy on CASIA-B, respectively, substantially surpassing the best existing approaches. In addition, the proposed method achieves satisfactory accuracy even when few frames are available in the test samples; for instance, it reaches 85.0% on CASIA-B with only 7 frames.
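
    The central idea above is that gait frames are treated as an unordered set, so any permutation-invariant pooling over per-frame CNN features yields the same representation regardless of frame order or source video. The following is a minimal sketch of that idea, assuming PyTorch and illustrative layer sizes and class counts; it is not the authors' actual architecture.

```python
# Minimal sketch of permutation-invariant set pooling over per-frame CNN
# features, in the spirit of the deep-set viewpoint described above.
# Module names, layer sizes and the number of identities are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class SetGaitEncoder(nn.Module):
    def __init__(self, feat_dim: int = 128, num_ids: int = 74):
        super().__init__()
        # Frame-level CNN: each silhouette frame is encoded independently.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 1, H, W); frame order is irrelevant.
        b, t, c, h, w = frames.shape
        per_frame = self.frame_cnn(frames.view(b * t, c, h, w)).view(b, t, -1)
        # Max over the frame (set) dimension -> permutation invariant.
        set_feat = self.proj(per_frame.max(dim=1).values)
        return self.classifier(set_feat)

# Frames from different videos/conditions can be mixed into one set,
# and even short sets (e.g. 7 frames) produce a valid representation.
silhouettes = torch.rand(2, 7, 1, 64, 44)
logits = SetGaitEncoder()(silhouettes)
print(logits.shape)  # torch.Size([2, 74])
```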

    Signature Verification through Pattern Recognition

    Get PDF
    Because the signature is widely used for personal authentication, there is a need for an automatic verification system. Verification can be performed either offline or online, depending on the application. Online systems use dynamic characteristics of a signature captured while the signature is being made, whereas offline systems work on a scanned image of the signature. [4] We have worked on offline verification of signatures using a set of shape-based geometric features. The features used are the baseline slant angle, aspect ratio, normalized area, center of gravity, number of edge points, number of cross points, and the slope of the line joining the centers of gravity of the two halves of the scanned signature image. Pre-processing of the scanned image is necessary to isolate the signature region and remove any spurious noise before extracting the features. [4] The system is first trained using a database of signatures acquired from the individuals whose signatures are to be authenticated. For each subject, an average signature is obtained by combining the above features derived from a set of his or her genuine sample signatures. This average signature acts as the template against which a claimed test signature is verified. The Euclidean distance in feature space between the claimed signature and the template serves as a measure of similarity between the two. If this distance is below a pre-defined threshold (corresponding to a minimum acceptable degree of similarity), the test signature is accepted as belonging to the claimed subject; otherwise it is flagged as a forgery. [4] The report describes the pre-processing steps and the features listed above, along with implementation details and simulation results. [4]
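
    The verification rule described above reduces to computing a fixed-length feature vector per signature, averaging it over genuine samples to form a template, and thresholding the Euclidean distance between the template and a test signature. A minimal sketch follows, assuming NumPy and a simplified subset of the listed features; the function names and the threshold value are illustrative, not taken from the report.

```python
# Minimal sketch of template-based offline signature verification:
# average feature template per subject + Euclidean-distance threshold.
# Only a few of the listed shape features are computed here; names and
# the threshold value are illustrative assumptions.
import numpy as np

def extract_features(binary_sig: np.ndarray) -> np.ndarray:
    """binary_sig: 2-D array with 1 = ink pixel after pre-processing."""
    ys, xs = np.nonzero(binary_sig)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect_ratio = w / h
    normalized_area = binary_sig.sum() / (h * w)
    cog_x, cog_y = xs.mean() / w, ys.mean() / h  # center of gravity
    return np.array([aspect_ratio, normalized_area, cog_x, cog_y])

def build_template(genuine_samples: list[np.ndarray]) -> np.ndarray:
    # "Average signature": mean feature vector over genuine samples.
    return np.mean([extract_features(s) for s in genuine_samples], axis=0)

def verify(test_sig: np.ndarray, template: np.ndarray,
           threshold: float = 0.5) -> bool:
    # Accept as genuine if the distance to the template is small enough.
    dist = np.linalg.norm(extract_features(test_sig) - template)
    return dist < threshold
```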

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Get PDF
    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading the annotation data and for blind testing of new methods. We expect this benchmark to be useful for stimulating the development of powerful new title- and title/abstract-based search techniques for relevant articles in biomedical research. Peer reviewed.
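
    One of the baseline methods named above, Term Frequency-Inverse Document Frequency, amounts to ranking candidate articles by similarity of their term-weight vectors against a seed article. The example below is a sketch of that baseline, assuming scikit-learn, cosine similarity, and a toy corpus; it is not the evaluation pipeline used for the benchmark.

```python
# Sketch of a TF-IDF baseline for recommending articles similar to a
# seed article by title/abstract text. The toy corpus and variable
# names are illustrative, not benchmark data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "Deep learning methods for biomedical literature retrieval"
candidates = [
    "Neural ranking models for biomedical document search",
    "Gait recognition with convolutional neural networks",
    "Benchmarking similarity detection in PubMed abstracts",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)

# Cosine similarity between the seed (row 0) and every candidate.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, title in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {title}")
```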

    Tricks and tracks in removal of emerging contaminants from the wastewater through hybrid treatment systems: A review

    No full text
